Lotus: Serverless In-Transit Data Processing for Edge-based Pub/Sub
Publish-subscribe systems are a popular approach for edge-based IoT use
cases: Heterogeneous, constrained edge devices can be integrated easily, with
message routing logic offloaded to edge message brokers. Message processing,
however, is still done on constrained edge devices. Complex content-based
filtering, the transformation between data representations, or message
extraction place a considerable load on these systems, and resulting
superfluous message transfers strain the network.
In this paper, we propose Lotus, adding in-transit data processing to an edge
publish-subscribe middleware in order to offload basic message processing from
edge devices to brokers. Specifically, we leverage the Function-as-a-Service
paradigm, which offers support for efficient multi-tenancy, scale-to-zero, and
real-time processing. With a proof-of-concept prototype of Lotus, we validate
its feasibility and demonstrate how it can be used to offload sensor data
transformation to the publish-subscribe messaging middleware.
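The broker-side offloading described above can be sketched as follows. The `Broker` class and its `deploy_transform` hook are illustrative assumptions, not the actual Lotus interface; in Lotus, the transformation would be deployed as a FaaS function at the edge broker:

```python
# Minimal sketch of in-transit message processing at a pub/sub broker,
# in the spirit of Lotus. The broker API and the function-registration
# hook are hypothetical illustrations, not the real Lotus interface.

class Broker:
    def __init__(self):
        self.subscribers = {}   # topic -> list of subscriber callbacks
        self.transforms = {}    # topic -> in-transit transformation

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def deploy_transform(self, topic, fn):
        # In Lotus, this would deploy a FaaS function at the broker.
        self.transforms[topic] = fn

    def publish(self, topic, message):
        # Apply the transformation before fan-out, so constrained edge
        # subscribers receive ready-to-use data and no superfluous
        # processing happens on the devices themselves.
        fn = self.transforms.get(topic)
        if fn is not None:
            message = fn(message)
        for cb in self.subscribers.get(topic, []):
            cb(message)


# Example: offload Fahrenheit-to-Celsius conversion to the broker.
received = []
broker = Broker()
broker.subscribe("sensors/temp", received.append)
broker.deploy_transform("sensors/temp", lambda f: round((f - 32) * 5 / 9, 1))
broker.publish("sensors/temp", 98.6)
print(received)  # [37.0]
```

Here the subscriber never sees the raw Fahrenheit reading; the conversion runs in transit at the broker, which is the kind of basic message processing the paper offloads from edge devices.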
ProFaaStinate: Delaying Serverless Function Calls to Optimize Platform Performance
Function-as-a-Service (FaaS) enables developers to run serverless
applications without managing operational tasks. In current FaaS platforms,
both synchronous and asynchronous calls are executed immediately. In this
paper, we present ProFaaStinate, which extends serverless platforms to enable
delayed execution of asynchronous function calls. This allows platforms to
execute calls at convenient times with higher resource availability or lower
load. ProFaaStinate is able to optimize performance without requiring deep
integration into the rest of the platform, or a complex systems model. In our
evaluation, our prototype built on top of Nuclio can reduce request response
latency and workflow duration while also preventing the system from being
overloaded during load peaks. Using a document preparation use case, we show a
54% reduction in average request response latency. This reduction in resource
usage benefits both platforms and users as cost savings.
Comment: Accepted for publication in Proc. of 9th International Workshop on Serverless Computing (WoSC 23)
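The core idea of delaying asynchronous calls until the platform has spare capacity can be sketched with a deadline-ordered queue. The scheduler below is a simplified illustration; the names `enqueue` and `drain` and the load-threshold heuristic are assumptions, not the ProFaaStinate API:

```python
# Sketch of delayed execution of asynchronous function calls, in the
# spirit of ProFaaStinate: calls are held back and run at convenient
# times with lower load, but never past their deadline.

import heapq
import itertools

class DelayedCallQueue:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker for equal deadlines

    def enqueue(self, fn, args, deadline):
        # Asynchronous calls carry a deadline instead of running immediately.
        heapq.heappush(self._heap, (deadline, next(self._counter), fn, args))

    def drain(self, now, load, load_threshold):
        # Execute calls when the platform is idle enough, or when a
        # call's deadline has passed and it must run regardless of load.
        results = []
        while self._heap:
            deadline, _, fn, args = self._heap[0]
            if load < load_threshold or deadline <= now:
                heapq.heappop(self._heap)
                results.append(fn(*args))
            else:
                break
        return results

q = DelayedCallQueue()
q.enqueue(lambda x: x * 2, (21,), deadline=10)
q.enqueue(lambda x: x + 1, (1,), deadline=5)

# During a load peak, nothing is due yet: calls stay queued.
print(q.drain(now=0, load=0.9, load_threshold=0.5))  # []
# Load drops: both calls run, earliest deadline first.
print(q.drain(now=0, load=0.2, load_threshold=0.5))  # [2, 42]
```

This captures the trade-off in the abstract: requests are not lost, only shifted to windows with higher resource availability, which smooths out load peaks.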
Managing Data Replication and Distribution in the Fog with FReD
The heterogeneous, geographically distributed infrastructure of fog computing
poses challenges in data replication, data distribution, and data mobility for
fog applications. Fog computing is still missing the necessary abstractions to
manage application data, and fog application developers need to re-implement
data management for every new piece of software. Proposed solutions are limited
to certain application domains, such as the IoT, are not flexible in regard to
network topology, or do not provide the means for applications to control the
movement of their data.
In this paper, we present FReD, a data replication middleware for the fog.
FReD serves as a building block for configurable fog data distribution and
enables low-latency, high-bandwidth, and privacy-sensitive applications. FReD
provides a common data access interface across heterogeneous infrastructure and
network topologies, offers transparent and controllable data distribution,
and can be integrated with applications from different domains. To evaluate our
approach, we present a prototype implementation of FReD and show the benefits
of developing with FReD using three case studies of fog computing applications.
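What such a common data access interface with application-controlled placement could look like is sketched below. The class and method names are illustrative assumptions, not FReD's actual client API:

```python
# Hypothetical sketch of a fog data replication middleware client in
# the spirit of FReD: one access interface, with the application in
# control of which fog nodes replicate its data.

class ReplicaNode:
    def __init__(self, name):
        self.name = name
        self.store = {}

class FogDataClient:
    def __init__(self):
        self.replicas = {}  # group -> list of ReplicaNode

    def create_group(self, group, nodes):
        # The application decides which fog nodes replicate this group,
        # e.g. only nodes in a privacy-compliant region.
        self.replicas[group] = nodes

    def put(self, group, key, value):
        # The middleware keeps all chosen replicas in sync.
        for node in self.replicas[group]:
            node.store[(group, key)] = value

    def get(self, group, key, near):
        # Read from the closest replica (here: matched by name) for
        # low-latency access; fall back to any replica otherwise.
        for node in self.replicas[group]:
            if node.name == near:
                return node.store[(group, key)]
        return self.replicas[group][0].store[(group, key)]

edge = ReplicaNode("edge-berlin")
cloud = ReplicaNode("cloud-eu")
client = FogDataClient()
client.create_group("sensor-readings", [edge, cloud])
client.put("sensor-readings", "temp", 21.5)
print(client.get("sensor-readings", "temp", near="edge-berlin"))  # 21.5
```

The point of the sketch is the separation of concerns from the abstract: the application states *where* data may live, while reads and writes go through one interface regardless of the underlying topology.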
Fusionize: Improving Serverless Application Performance through Feedback-Driven Function Fusion
Serverless computing increases developer productivity by removing operational concerns such as managing hardware or software runtimes. Developers, however, still need to partition their application into functions, which can be error-prone and adds complexity: Using a small function size where only the smallest logical unit of an application is inside a function maximizes flexibility and reusability. Yet, having small functions leads to invocation overheads, additional cold starts, and may increase cost due to double billing during synchronous invocations. In this paper, we present Fusionize, a framework that removes these concerns from developers by automatically fusing the application code into a multi-function orchestration with varying function size. Developers only need to write the application code following a lightweight programming model and do not need to worry about how the application is turned into functions. Our framework automatically fuses different parts of the application into functions and manages their interactions. Leveraging monitoring data, the framework optimizes the distribution of application parts to functions to meet deployment goals such as end-to-end latency and cost. Using two example applications, we show that Fusionize can automatically and iteratively improve the deployment artifacts of the application.
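The feedback loop described above can be sketched as a greedy fusion heuristic over monitoring data: pairs of application parts whose observed invocation overhead is high get placed in the same deployment unit. The function name and the threshold rule are illustrative assumptions, not Fusionize's actual optimizer:

```python
# Simplified sketch of feedback-driven function fusion in the spirit
# of Fusionize: start from the finest granularity (one function per
# application part) and merge caller/callee pairs whose measured
# invocation overhead exceeds a threshold.

def fuse_by_feedback(parts, call_latency_ms, fusion_threshold_ms):
    # One deployment unit per part at first; units are shared set objects.
    unit_of = {p: {p} for p in parts}
    # Consider the most expensive call edges first.
    for (caller, callee), overhead in sorted(
            call_latency_ms.items(), key=lambda kv: -kv[1]):
        if overhead > fusion_threshold_ms and unit_of[caller] is not unit_of[callee]:
            merged = unit_of[caller] | unit_of[callee]
            for p in merged:
                unit_of[p] = merged
    # Deduplicate and return the resulting deployment units.
    units = {frozenset(u) for u in unit_of.values()}
    return sorted(sorted(u) for u in units)

parts = ["parse", "render", "notify"]
# Monitoring data: (caller, callee) -> observed invocation overhead.
latency = {("parse", "render"): 45.0, ("render", "notify"): 3.0}
print(fuse_by_feedback(parts, latency, fusion_threshold_ms=20.0))
# [['notify'], ['parse', 'render']]
```

Here the expensive `parse -> render` edge is fused into one function, avoiding its invocation overhead and double billing, while the cheap `render -> notify` edge stays a remote call; rerunning the heuristic as new monitoring data arrives gives the iterative improvement the abstract describes.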
Streaming vs. Functions: A Cost Perspective on Cloud Event Processing
In cloud event processing, data generated at the edge is processed in real-time by cloud resources. Both distributed stream processing (DSP) and Function-as-a-Service (FaaS) have been proposed to implement such event processing applications. FaaS emphasizes fast development and easy operation, while DSP emphasizes efficient handling of large data volumes. Despite their architectural differences, both can be used to model and implement loosely-coupled job graphs. In this paper, we consider the choice between FaaS and DSP from a cost perspective. We implement stateless and stateful workflows from the Theodolite benchmarking suite using cloud FaaS and DSP. In an extensive evaluation, we show how application type, cloud service provider, and runtime environment can influence the cost of application deployments and derive decision guidelines for cloud engineers.
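The structural difference behind the cost comparison can be sketched with a back-of-the-envelope model: FaaS bills per invocation and compute time, while a DSP cluster bills for provisioned capacity regardless of load. The prices and the linear model below are illustrative assumptions, not figures from the paper's evaluation:

```python
# Toy cost model for the FaaS-vs-DSP trade-off: pay-per-use versus
# pay-for-provisioned-capacity. All prices are made-up placeholders.

def faas_monthly_cost(events_per_s, gb_s_per_event,
                      price_per_gb_s, price_per_million_requests):
    # FaaS cost scales with the number of events and their compute time.
    events = events_per_s * 30 * 24 * 3600
    return (events * gb_s_per_event * price_per_gb_s
            + events / 1e6 * price_per_million_requests)

def dsp_monthly_cost(num_vms, vm_price_per_hour):
    # A DSP cluster bills for provisioned VMs whether or not they are busy.
    return num_vms * vm_price_per_hour * 30 * 24

# At a low event rate, paying per invocation beats an idle cluster.
low_rate_faas = faas_monthly_cost(1, 0.0001, 0.0000167, 0.20)
cluster = dsp_monthly_cost(3, 0.10)
print(round(low_rate_faas, 2), round(cluster, 2))  # 0.52 216.0

# At a high event rate, the fixed-capacity cluster becomes cheaper.
high_rate_faas = faas_monthly_cost(5000, 0.0001, 0.0000167, 0.20)
print(high_rate_faas > cluster)  # True
```

This is exactly why the paper's guidelines depend on application type and load: the break-even point between the two pricing models shifts with event rate, per-event compute time, and provider prices.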